81 research outputs found

    Energy-efficient bandwidth allocation for multiuser scalable video streaming over WLAN

    We consider the problem of packet scheduling for the transmission of multiple video streams over a wireless local area network (WLAN). A cross-layer optimization framework is proposed to minimize the wireless transceiver energy consumption while meeting the users' required visual quality constraints. The framework relies on the IEEE 802.11 standard and on the embedded bitstream structure of the scalable video coding scheme. It integrates an application-level video quality metric as the QoS constraint (instead of a communication-layer quality metric) with energy consumption optimization through link-layer scaling and sleeping. Both energy minimization and min-max energy optimization strategies are discussed. Simulation results demonstrate significant energy gains compared to state-of-the-art approaches.
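
    As a toy illustration of the link-layer "scale vs. sleep" trade-off such a framework exploits (not the paper's actual algorithm; the mode table, power figures, and frame timing below are hypothetical), one can compare, per frame, the energy of each transmission rate combined with sleeping for the residual time:

```python
def frame_energy(bits, rate, frame_s, p_tx, p_sleep=0.05):
    """Energy of one frame: transmit `bits` at `rate` (bit/s) with
    radio power `p_tx` (W), then sleep for the rest of the frame."""
    t_tx = bits / rate
    if t_tx > frame_s:
        return None          # this rate cannot meet the deadline
    return p_tx * t_tx + p_sleep * (frame_s - t_tx)

def best_mode(bits, frame_s, modes):
    """Pick the (rate, power) mode minimising per-frame energy."""
    options = [(frame_energy(bits, r, frame_s, p), (r, p))
               for r, p in modes]
    return min((e, m) for e, m in options if e is not None)

# Hypothetical 802.11-like modes: (PHY rate in bit/s, radio power in W)
modes = [(6e6, 0.8), (24e6, 1.1), (54e6, 1.4)]
energy, mode = best_mode(bits=2e6, frame_s=0.1, modes=modes)
```

    With these numbers the fastest rate wins: its power cost grows far more slowly than its airtime shrinks, so "racing to sleep" saves energy, which is the intuition behind combining link-layer scaling with sleeping.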

    A theoretical framework for soft-information-based synchronization in iterative (Turbo) receivers

    This contribution considers turbo synchronization, that is to say, the use of soft data information to estimate parameters such as the carrier phase, frequency, or timing offsets of a modulated signal within an iterative data demodulator. In turbo synchronization, the receiver exploits the soft decisions computed at each turbo decoding iteration to provide a reliable estimate of some signal parameters. The aim of our paper is to show that this “turbo-estimation” approach can be regarded as a special case of the expectation-maximization (EM) algorithm. This leads to a general theoretical framework for turbo synchronization from which parameter estimation procedures can be derived for carrier phase and frequency offset, as well as for timing offset and signal amplitude. The proposed mathematical framework is illustrated by simulation results reported for the particular case of carrier phase and frequency offset estimation of a turbo-coded 16-QAM signal.
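
    A minimal sketch of the kind of soft-decision phase estimate an EM formulation yields (shown here for BPSK rather than the paper's 16-QAM, with illustrative numbers): the soft symbol is tanh(L/2) of the decoder LLR, and the phase update is the angle of the soft correlation:

```python
import cmath
import math

def em_phase_update(rx, llrs):
    """One turbo-sync iteration for BPSK: correlate the received
    samples with the soft symbols tanh(L/2) and take the angle."""
    corr = sum(r * math.tanh(l / 2) for r, l in zip(rx, llrs))
    return cmath.phase(corr)

# Toy data: BPSK symbols rotated by a 0.3 rad carrier phase,
# with confident, correctly signed decoder LLRs.
true_phase = 0.3
syms = [1, -1, 1, 1, -1]
rx = [s * cmath.exp(1j * true_phase) for s in syms]
llrs = [8 * s for s in syms]
est = em_phase_update(rx, llrs)
```

    As the decoder's soft decisions harden over iterations, the correlation sharpens and the phase estimate tightens, which is the feedback loop the abstract describes.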

    A Lightweight Joint RIS/BS Configuration Scheme

    The integration of RIS elements in upcoming 6G networks appears to be a major breakthrough for extending network coverage and capacity. This paper introduces a new link-layer scheme for RIS-enabled communications, building on existing models for the physical layer of RIS technologies. The scheme integrates the selection of precoders/beams from a codebook and the scheduling of UEs at once. Furthermore, elements for the integration of the proposed scheme into current 3GPP 5G specifications are addressed. The scheduler combines a slow mechanism operating at the downlink OFDMA frame scale with a standard proportional fair scheduler operating at the OFDMA slot scale and accommodating both LOS and non-LOS UEs. We introduce an optimization framework for the proposed scheduler, whose performance is then simulated in a reference scenario. The spectral efficiency figures in our tests confirm a large gain of the scheme over a baseline direct-path scheme. Finally, the scheduler optimization achieves a further improvement of 15-20% for non-line-of-sight users.
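
    The slot-scale component mentioned here is the standard proportional fair rule; a generic sketch (not the paper's joint RIS/BS algorithm, and with illustrative rates) serves each slot to the UE maximising instantaneous rate over average throughput:

```python
def pf_schedule(rates, avg, alpha=0.1):
    """One proportional fair slot: serve the UE maximising
    instantaneous rate / average throughput, then update the
    exponentially filtered throughput averages."""
    u = max(range(len(rates)), key=lambda i: rates[i] / avg[i])
    for i in range(len(avg)):
        served = rates[i] if i == u else 0.0
        avg[i] = (1 - alpha) * avg[i] + alpha * served
    return u

# Two UEs: UE0 briefly has a much better channel, then fades.
avg = [1.0, 1.0]
slot_rates = [(10.0, 2.0), (10.0, 2.0), (1.0, 2.0)]
picks = [pf_schedule(r, avg) for r in slot_rates]
```

    The filter makes the rule self-balancing: a UE served repeatedly sees its average rise and its metric fall, so starved UEs (e.g. non-LOS ones) eventually win slots.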

    Luciola Hypertelescope Space Observatory

    Luciola is a large (one kilometer) "multi-aperture densified-pupil imaging interferometer", or "hypertelescope", employing many small apertures rather than a few large ones for obtaining direct snapshot images with a high information content. A diluted collector mirror, deployed in space as a flotilla of small mirrors, focuses a sky image which is exploited by several beam-combiner spaceships. Each contains a pupil-densifier micro-lens array to avoid the diffractive spread and image attenuation caused by the small sub-apertures. The elucidation of hypertelescope imaging properties during the last decade has shown that many small apertures tend to be far more efficient, regarding the science yield, than a few large ones providing a comparable collecting area. For similar underlying physical reasons, radio astronomy has also evolved in the direction of many-antenna systems such as the proposed Low Frequency Array, having hundreds of thousands of individual receivers. With its high limiting magnitude, reaching the mv=30 limit of HST when 100 collectors of 25 cm match its collecting area, high-resolution direct imaging in multiple channels, broad spectral coverage from the 1200 Angstrom ultra-violet to the 20 micron infra-red, and apodization, coronagraphic, and spectroscopic capabilities, the proposed hypertelescope observatory addresses very broad and innovative science covering different areas of ESA's Cosmic Vision program. In the initial phase, a focal spacecraft covering the UV to near-IR spectral range of EMCCD photon-counting cameras (currently 200 to 1000 nm) will image details on the surface of many stars, as well as their environment, including multiple stars and clusters. Spectra will be obtained for each resel. It will also image neutron-star, black-hole, and micro-quasar candidates, as well as active galactic nuclei, quasars, gravitational lenses, and other Cosmic Vision targets observable with the initial modest crowding limit.
    With subsequent upgrade missions, the spectral coverage can be extended from 120 nm to 20 microns, using four detectors carried by two to four focal spacecraft. The number of collector mirrors in the flotilla can also be increased from 12 to 100 and possibly 1,000. The imaging and spectroscopy of habitable exoplanets in the mid infra-red then becomes feasible once the collecting area reaches 6 m², using a specialized mid-infra-red focal spacecraft. Calculations (Boccaletti et al., 2000) have shown that hypertelescope coronagraphy has unequalled sensitivity for detecting, at mid-infra-red wavelengths, faint exoplanets within the exo-zodiacal glare. Later upgrades will enable the more difficult imaging and spectroscopy of these faint objects at visible wavelengths, using refined techniques of adaptive coronagraphy (Labeyrie & Le Coroller, 2004). Together, the infra-red and visible spectral data carry rich information on the possible presence of life. The close environment of the central black hole in the Milky Way will be imageable with unprecedented detail in the near infra-red. Cosmological imaging of remote galaxies at the limit of the known universe is also expected, from the ultra-violet to the near infra-red, following the first upgrade, and with greatly increasing sensitivity through successive upgrades. These areas will indeed greatly benefit from the upgrades, in terms of dynamic range, limiting complexity of the objects to be imaged, size of the elementary Direct Imaging Field, and limiting magnitude, approaching that of an 8-meter space telescope when 1000 apertures of 25 cm are installed. Similar gains will occur for addressing fundamental problems in physics and cosmology, particularly when observing neutron stars and black holes, single or binary, including the giant black holes, with accretion disks and jets, in active galactic nuclei beyond the Milky Way.
    Gravitational lensing and micro-lensing patterns, including time-variable patterns and perhaps millisecond lensing flashes which may be beamed by diffraction from sub-stellar masses at sub-parsec distances (Labeyrie, 1994), will also be observable initially in the favourable cases, and upgrades will greatly improve the number of observable objects. The observability of gravitational waves emitted by binary lensing masses, in the form of modulated lensing patterns, is a debated issue (Ragazzoni et al., 2003) but will also become addressable observationally. The technology readiness of Luciola approaches levels where low-orbit testing and stepwise implementation will become feasible in the 2015-2025 time frame. For the following decades beyond 2020, once accurate formation-flying techniques are mastered, much larger hypertelescopes such as the proposed 100 km Exo-Earth Imager and the 100,000 km Neutron Star Imager should also become feasible. Luciola is therefore also seen as a precursor toward such very powerful instruments.

    Cross-layer reduction of wireless network card idle time to optimize energy consumption of pull thin client protocols

    Thin client computing trades local processing for network bandwidth consumption by offloading application logic to remote servers. User input and display updates are exchanged between client and server through a thin client protocol. On wireless devices, the thin client protocol traffic can lead to a significantly higher power consumption of the radio interface. In this article, a cross-layer framework is presented that transitions the wireless network interface card (WNIC) to the energy-conserving sleep mode when no traffic from the server is expected. The approach is validated for different wireless channel conditions, such as path loss and available bandwidth, as well as for different network round-trip time values. Using this cross-layer algorithm for a sample scenario with a remote text editor, and through experiments based on actual user traces, a reduction of the WNIC energy consumption of up to 36.82% is obtained without degrading the application's reactivity.
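
    A minimal sketch of the underlying timing argument (hypothetical timings and power figures, not the article's exact algorithm): after a pull request is sent, the idle gap until the server's reply can be spent in sleep mode as long as it exceeds the wake-up cost:

```python
def sleep_budget(rtt_s, proc_s, wakeup_s, guard_s=0.002):
    """Sleep time available before the server's reply: the idle gap
    (round trip + server processing) minus the wake-up cost and a
    small guard interval; 0.0 if the gap cannot pay for waking up."""
    budget = rtt_s + proc_s - wakeup_s - guard_s
    return budget if budget > 0 else 0.0

def energy_saved(sleep_s, p_idle=0.9, p_sleep=0.05):
    """Energy saved (J) vs. idling through the same interval."""
    return (p_idle - p_sleep) * sleep_s

# Example: 50 ms round trip, 20 ms server processing, 10 ms wake-up
budget = sleep_budget(rtt_s=0.05, proc_s=0.02, wakeup_s=0.01)
saved = energy_saved(budget)
```

    With these numbers the card sleeps 58 ms of a 70 ms gap; the guard interval hedges against the reply arriving early, which would otherwise cost retransmissions.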

    Architectures for Cognitive Radio Testbeds and Demonstrators – An Overview

    Wireless communication standards are developed at an ever-increasing pace, and significant effort is put into research on new communication methods and concepts. At the physical layer, such topics include MIMO, cooperative communication, and error control coding, whereas research at the medium access layer includes link control, network topology, and cognitive radio. At the same time, implementations are moving from traditional fixed hardware architectures towards software, allowing more efficient development. Today, field-programmable gate arrays (FPGAs) and regular desktop computers are fast enough to handle complete baseband processing chains, and there are several platforms, both open-source and commercial, providing such solutions. The aim of this paper is to give an overview of five of the available platforms and their characteristics, and to compare the features and performance measures of the different systems.

    Novel turbo-equalization techniques for coded digital transmission

    Turbo-codes have attracted an explosion of interest since their discovery in 1993: for the first time, the gap to the limits predicted by information and coding theory was on the way to being bridged. The astonishing performance of turbo-codes relies on two major concepts: code concatenation, so as to build a powerful global code, and iterative decoding, in order to efficiently approximate the optimal decoding process. As a matter of fact, the techniques involved in turbo coding and in the associated iterative decoding strategy can be generalized to other problems frequently encountered in digital communications. This results in a so-called turbo principle. A famous application of this principle is the communication scheme referred to as turbo-equalization: when considering coded transmission over a frequency-selective channel, it enables the receiver to jointly and efficiently perform the required equalization and decoding tasks. This leads to significant performance improvements with regard to conventional disjoint approaches. In this context, the purpose of the present thesis is the derivation and performance study of novel digital communication receivers which perform iterative joint detection and decoding by means of the turbo principle. The binary turbo-equalization scheme is considered as a starting point and improved in several ways, which are detailed throughout this work. Emphasis is always put on the performance analysis of the proposed communication systems, so as to gain insight into their behavior. Practical considerations are also taken into account, in order to provide realistic, tractable, and efficient solutions. (FSA 3)--UCL, 200
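
    A toy turbo-equalization loop under simplifying assumptions (BPSK over a noiseless 2-tap channel, a rate-1/3 repetition code standing in for a real convolutional code, and a soft interference canceller as the equalizer; all parameters illustrative) shows the extrinsic-information exchange the turbo principle prescribes:

```python
import math

def sic_equalizer(y, prior_llr, h=(1.0, 0.6), noise_var=0.3):
    """Soft interference canceller for y[k] = h0*s[k] + h1*s[k-1] + n,
    BPSK s in {+1,-1}. Priors are used only to form soft symbols of
    *other* positions, so the output LLRs are extrinsic."""
    h0, h1 = h
    n = len(y)
    sbar = [math.tanh(l / 2) for l in prior_llr]   # soft symbols
    ext = []
    for k in range(n):
        # y[k]: cancel ISI from s[k-1], match to the h0 term
        z = y[k] - (h1 * sbar[k - 1] if k > 0 else 0.0)
        llr = 2 * h0 * z / noise_var
        # y[k+1]: cancel the direct term, match to the h1 term
        if k + 1 < n:
            z2 = y[k + 1] - h0 * sbar[k + 1]
            llr += 2 * h1 * z2 / noise_var
        ext.append(llr)
    return ext

def rep3_decoder(llrs):
    """Rate-1/3 repetition decoder: extrinsic LLR of each copy is
    the sum of the LLRs of the other two copies."""
    out = [0.0] * len(llrs)
    for b in range(0, len(llrs), 3):
        tot = llrs[b] + llrs[b + 1] + llrs[b + 2]
        for i in range(3):
            out[b + i] = tot - llrs[b + i]
    return out

# Noiseless toy run: info bits 1, 0 -> repeated BPSK symbols
s = [1, 1, 1, -1, -1, -1]
h0, h1 = 1.0, 0.6
y = [h0 * s[k] + (h1 * s[k - 1] if k > 0 else 0.0)
     for k in range(len(s))]

prior = [0.0] * len(s)
for _ in range(3):                  # turbo iterations
    prior = rep3_decoder(sic_equalizer(y, prior, (h0, h1)))
bits = [1 if l > 0 else 0 for l in prior]
```

    Each pass, the decoder's extrinsic LLRs sharpen the equalizer's interference cancellation, which in turn feeds better LLRs back to the decoder; that feedback loop is the joint detection-and-decoding structure studied in the thesis.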

    Bit-interleaved turbo equalization over static frequency-selective channels: Constellation mapping impact

    The purpose is to assess the performance of bit-interleaved turbo equalization (TE) over static frequency-selective channels. The asymptotic performance is therefore first pointed out, emphasizing the fundamental role played by the constellation mapping. This specific feature is then further analyzed using the extrinsic information transfer (EXIT) chart technique, leading to an efficient optimization tool. This finally shows that bit-interleaved TE can outperform its symbol-interleaved counterpart.
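
    The EXIT-chart technique mentioned here tracks the mutual information carried by extrinsic LLRs across iterations; the standard empirical measurement, sketched below for illustration, is:

```python
import math

def extrinsic_info(llrs, bits):
    """Empirical extrinsic information I = 1 - E[log2(1 + exp(-b*L))]
    for bits b in {+1, -1}: 0 for uninformative LLRs, approaching 1
    for confident correct ones (the quantity on an EXIT chart's axes)."""
    tot = sum(math.log2(1 + math.exp(-b * l))
              for l, b in zip(llrs, bits))
    return 1 - tot / len(llrs)

i_zero = extrinsic_info([0.0, 0.0], [1, -1])    # no information
i_good = extrinsic_info([4.0, -4.0], [1, -1])   # confident, correct
```

    Plotting this quantity for the equalizer against that of the decoder yields the transfer chart; the mapping-dependent shape of the equalizer curve is what the optimization in the abstract exploits.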